279 research outputs found

    TechMiner: Extracting Technologies from Academic Publications

    In recent years we have seen the emergence of a variety of scholarly datasets. Typically these capture ‘standard’ scholarly entities and their connections, such as authors, affiliations, venues, publications, and citations. However, as the repositories grow and the technology improves, researchers are adding new entities to these repositories to develop a richer model of the scholarly domain. In this paper, we introduce TechMiner, a new approach that combines NLP, machine learning and semantic technologies to mine technologies from research publications and generate an OWL ontology describing their relationships with other research entities. The resulting knowledge base can support a number of tasks, such as: richer semantic search, which can exploit the technology dimension to support better retrieval of publications; richer expert search; monitoring the emergence and impact of new technologies, both within and across scientific fields; and studying the scholarly dynamics associated with the emergence of new technologies. TechMiner was evaluated on a manually annotated gold standard; the results indicate that it significantly outperforms alternative NLP approaches and that its semantic features yield substantial improvements in both recall and precision.
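To make the output concrete, the sketch below shows how an extracted technology mention and its link to a publication could be serialised as RDF triples in Turtle syntax. The namespace, class and property names (`ex:Technology`, `ex:usesTechnology`) are illustrative assumptions, not TechMiner's actual vocabulary:

```python
# Hedged sketch: emit Turtle triples linking a paper to an extracted
# technology. All names below are hypothetical, not TechMiner's schema.

PREFIXES = (
    "@prefix ex: <http://example.org/scholarly#> .\n"
    "@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .\n"
)

def technology_triples(tech_id: str, label: str, paper_id: str) -> str:
    """State that a paper uses a technology, as two Turtle statements."""
    return (
        f"ex:{tech_id} rdf:type ex:Technology ;\n"
        f'    ex:label "{label}" .\n'
        f"ex:{paper_id} ex:usesTechnology ex:{tech_id} .\n"
    )

doc = PREFIXES + technology_triples("LSTM", "Long Short-Term Memory", "paper42")
print(doc)
```

In a real pipeline these triples would be loaded into a triple store, where the "richer semantic search" described above becomes a SPARQL query over the technology dimension.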

    Building interpretable models for polypharmacy prediction in older chronic patients based on drug prescription records

    © 2018 Kocbek et al. Background. Multimorbidity is an increasingly common problem in the older population and is tightly related to polypharmacy, i.e., the concurrent use of multiple medications by one individual. Detecting polypharmacy from drug prescription records is not only related to multimorbidity but can also point to incorrect use of medicines. In this work, we build models for predicting polypharmacy from drug prescription records for newly diagnosed chronic patients and evaluate their performance with a strong focus on interpretability of the results. Methods. A centrally collected nationwide dataset of prescription records was used to perform electronic phenotyping of patients for two chronic conditions: Type 2 diabetes mellitus (T2D) and cardiovascular disease (CVD). In addition, a hospital discharge dataset was linked to the prescription records. A regularized regression model was built for 11 different experimental scenarios on the two datasets, and model complexity was controlled with a maximum number of dimensions (MND) parameter. Performance and interpretability were evaluated with AUC, AUPRC, calibration plots, and interpretation by a medical doctor. Results. The CVD model reached AUC and AUPRC values of 0.900 (95% CI [0.898-0.901]) and 0.640 (0.635-0.645), respectively, while the T2D model reached 0.808 (0.803-0.812) and 0.732 (0.725-0.739). Reducing model complexity by 65% (CVD) and 48% (T2D) resulted in 3% and 4% lower AUC and 4% and 5% lower AUPRC values, respectively. Calibration plots showed that moderate calibration can be achieved by reducing model complexity without significant loss of predictive performance. Discussion. In this study, we found that it is possible to use drug prescription data to build a model for polypharmacy prediction in the older population. In addition, the study showed that it is possible to balance good performance with interpretability of the model while achieving acceptable calibration.
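The two metrics reported above can be computed from scratch. A minimal sketch, not the authors' code: AUC via the pairwise (Mann-Whitney) formulation, and AUPRC approximated as average precision over the ranked positives:

```python
# Hedged sketch of the evaluation metrics, not the study's implementation.

def auc(labels, scores):
    """ROC AUC as the fraction of (positive, negative) pairs ranked correctly.
    Ties between a positive and a negative score count as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def average_precision(labels, scores):
    """AUPRC approximated as the mean of precision at each positive hit."""
    ranked = sorted(zip(scores, labels), reverse=True)
    hits, ap = 0, 0.0
    for rank, (_, y) in enumerate(ranked, start=1):
        if y == 1:
            hits += 1
            ap += hits / rank       # precision at this recall point
    return ap / sum(labels)

print(auc([1, 0, 1, 0], [0.9, 0.8, 0.7, 0.1]))                # 0.75
print(average_precision([1, 0, 1, 0], [0.9, 0.8, 0.7, 0.1]))  # ~0.833
```

The gap between the reported AUC (0.900) and AUPRC (0.640) for CVD is expected behavior for an imbalanced outcome: AUPRC is sensitive to the positive-class prevalence while AUC is not.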

    Identification of drought extent using NVSWI and VHI in Iaşi county area, Romania

    Drought is a stochastic natural phenomenon that arises from a considerable deficit in precipitation. Among natural hazards, drought is known to cause extensive damage and affects a significant number of people. Techniques for observing agricultural drought through remote sensing are indirect: they rely on image-based parameters to characterize soil moisture conditions, since the soil is often obscured by vegetation cover. The procedures are mainly based on determining vegetation health or greenness using vegetation indices (VI), often in combination with canopy temperature anomalies derived from thermal infrared wavebands. In this study, remote sensing images from Landsat 8 OLI, acquired in May and June 2017, were used. The study area was Iasi County. To evaluate drought, the Normalized Vegetation Supply Water Index (NVSWI) and the Vegetation Health Index (VHI) were used. NVSWI is derived from the Vegetation Supply Water Index (VSWI), an index developed to combine NDVI with land surface temperature (LST) to detect moisture conditions. VHI was developed as a combination of the Vegetation Condition Index (VCI), one of the important vegetation indicators for monitoring weather-related variations such as droughts, and the Temperature Condition Index (TCI), which reflects temperature stress; both indices can be successfully used to determine the spatiotemporal extent of agricultural drought. After applying NVSWI to determine the degree of drought, we observed that “slight drought” prevailed in the May image and “normal” conditions in June. The second index, VHI, indicated “no drought” in both May and June. It can be concluded that VHI is a very good indicator for studying extreme drought, while NVSWI offers information about “normal” and “wet” areas.
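The per-pixel arithmetic behind these indices can be sketched as follows. The VCI, TCI and VHI formulas follow the standard Kogan-style definitions; the equal VCI/TCI weighting and the min-max normalization of VSWI into NVSWI are common choices, assumed here rather than taken from this study:

```python
# Hedged sketch of the drought-index arithmetic; formulas are the standard
# definitions, and the alpha weight / NVSWI scaling are assumptions.

def vci(ndvi, ndvi_min, ndvi_max):
    """Vegetation Condition Index: NDVI rescaled to its historical range (0-100)."""
    return 100.0 * (ndvi - ndvi_min) / (ndvi_max - ndvi_min)

def tci(lst, lst_min, lst_max):
    """Temperature Condition Index: higher LST (thermal stress) lowers the score."""
    return 100.0 * (lst_max - lst) / (lst_max - lst_min)

def vhi(vci_val, tci_val, alpha=0.5):
    """Vegetation Health Index: weighted blend of VCI and TCI."""
    return alpha * vci_val + (1.0 - alpha) * tci_val

def vswi(ndvi, lst):
    """Vegetation Supply Water Index: NDVI over land surface temperature."""
    return ndvi / lst

def nvswi(v, v_min, v_max):
    """Normalized VSWI, rescaled to 0-100 over the scene's VSWI range."""
    return 100.0 * (v - v_min) / (v_max - v_min)
```

In practice these functions would be applied band-wise to the Landsat 8 OLI reflectance (for NDVI) and thermal (for LST) rasters, then the resulting 0-100 scores are binned into classes such as “slight drought”, “normal” and “wet”.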

    A novel EGFR inhibitor acts as potent tool for hypoxia-activated prodrug systems and exerts strong synergistic activity with VEGFR inhibition in vitro and in vivo

    Small-molecule EGFR inhibitors have distinctly improved overall survival, especially in EGFR-mutated lung cancer. However, their use is often limited by severe adverse effects and rapid resistance development. To overcome these limitations, a hypoxia-activatable Co(III)-based prodrug (KP2334) was recently synthesized, releasing the new EGFR inhibitor KP2187 in a highly tumor-specific manner only in hypoxic areas of the tumor. However, the chemical modifications in KP2187 necessary for cobalt chelation could potentially interfere with its EGFR-binding ability. Consequently, in this study, the biological activity and EGFR inhibition potential of KP2187 were compared to clinically approved EGFR inhibitors. In general, its activity as well as its EGFR binding (shown in docking studies) was very similar to erlotinib and gefitinib (while other EGFR-inhibitory drugs behaved differently), indicating no interference of the chelating moiety with EGFR binding. Moreover, KP2187 significantly inhibited cancer cell proliferation as well as EGFR pathway activation in vitro and in vivo. Finally, KP2187 proved to be highly synergistic with VEGFR inhibitors such as sunitinib. This indicates that KP2187-releasing hypoxia-activated prodrug systems are promising candidates to overcome the clinically observed enhanced toxicity of EGFR-VEGFR inhibitor combination therapies.

    Polyunsaturated fatty acids as universal endogenous bioregulators

    Studying the physiological role of natural polyunsaturated fatty acids (PUFAs) at the current stage of scientific and technological progress is highly relevant. The present review is devoted to the biosynthesis of PUFAs, their regulatory functions, and their distribution in the tissue lipids of the animal organism. The review also considers new, effective methods for the isolation and identification of unsaturated fatty acids and their metabolites from biological material, such as pressurized extraction, capillary nuclear magnetic resonance, and liquid chromatography combined with mass spectrometry.

    Theoretical and technological building blocks for an innovation accelerator

    The scientific system that we use today was devised centuries ago and is inadequate for our current ICT-based society: the peer review system encourages conservatism, journal publications are monolithic and slow, data is often not available to other scientists, and the independent validation of results is limited. Building on the Innovation Accelerator paper by Helbing and Balietti (2011), this paper takes the initial global vision and reviews the theoretical and technological building blocks that can be used for implementing an innovation (in the first place: science) accelerator platform driven by re-imagining the science system. The envisioned platform would rest on four pillars: (i) redesign the incentive scheme to reduce behavior such as conservatism, herding and hyping; (ii) advance scientific publications by breaking up the monolithic paper unit and introducing other building blocks such as data, tools, experiment workflows and resources; (iii) use machine-readable semantics for publications, debate structures, provenance etc. in order to include the computer as a partner in the scientific process; and (iv) build an online platform for collaboration, including a network of trust and reputation among the different types of stakeholders in the scientific system: scientists, educators, funding agencies, policy makers, students and industrial innovators, among others. Any such improvements to the scientific system must support the entire scientific process (unlike current tools that chop it up into disconnected pieces), must facilitate and encourage collaboration and interdisciplinarity (again unlike current tools), must facilitate the inclusion of intelligent computing in the scientific process, and must accommodate not only the core scientific process but also other stakeholders such as science policy makers, industrial innovators, and the general public.

    The Bone Dysplasia Ontology: integrating genotype and phenotype information in the skeletal dysplasia domain

    <p>Abstract</p> <p>Background</p> <p>Skeletal dysplasias are a rare and heterogeneous group of genetic disorders affecting skeletal development. Patients with skeletal dysplasias suffer from many complex medical issues, including degenerative joint disease and neurological complications. Because the data and expertise associated with this field are both sparse and disparate, significant benefits will potentially accrue from the availability of an ontology that provides a shared conceptualisation of the domain knowledge and enables data integration, cross-referencing and advanced reasoning across the relevant but distributed data sources.</p> <p>Results</p> <p>We introduce the design considerations and implementation details of the Bone Dysplasia Ontology. We also describe the different components of the ontology, including a comprehensive and formal representation of the skeletal dysplasia domain as well as the related genotypes and phenotypes. We then briefly describe SKELETOME, a community-driven knowledge curation platform that is underpinned by the Bone Dysplasia Ontology. SKELETOME enables domain experts to use, refine, extend and apply the ontology--without any prior ontology engineering experience--to advance the body of knowledge in the skeletal dysplasia field.</p> <p>Conclusions</p> <p>The Bone Dysplasia Ontology represents the most comprehensive structured knowledge source for the skeletal dysplasias domain. It provides the means for integrating and annotating clinical and research data, not only at the generic domain knowledge level, but also at the level of individual patient case studies. It enables links between individual cases and publicly available genotype and phenotype resources based on a community-driven curation process that ensures a shared conceptualisation of the domain knowledge and its continuous incremental evolution.</p>